The Case for Compulsory Approximation

Author

  • Adrian Sampson
Abstract

Approximation is a fundamental concept in some application domains. In the next phase of research on approximate computing, the community should absorb lessons and constraints from these fields with compulsory approximation. This essay (1) surveys domains with compulsory approximation; and (2) advocates for research that builds new abstractions for old approximations.

1. Approximate Computing’s Adolescence

Research on approximate computing has aged out of its infancy. We have an initial slate of approximate hardware designs, new compiler optimizations for approximation on current hardware, and language tools to control accuracy–efficiency trade-offs. But the field has taken only tentative steps toward broader impact: vexing but valid concerns, like the feasibility of quality guarantees, still hinder widespread adoption of approximate systems.

As approximate computing begins to mature, the community should reflect on its strategy. Here are two reasonable visions for the next phase of approximation research:

  • Status quo. We keep developing general-purpose techniques for approximation and quality control. If we push the quality–efficiency trade-off far enough for enough benchmarks, industry will eventually adopt approximation. This direction involves only shallow engagement with individual application domains.
  • Technology transfer via case study. Researchers embed with specific application domains where we know approximation can be effective. We learn what it will take to “sell” our favorite techniques, from neural accelerators to stochastic logic circuits, to experts in those domains. These deployments will serve as case studies to inform and encourage broader adoption.

This essay advocates for a third, less obvious strategy to complement these directions. Instead of developing new approximation strategies from whole cloth and then working to apply them, the community should go to where approximations are already widespread.
In domains where approximation is a fact of life, not an optional luxury, there is no “selling” necessary: approximate computing is already deployed. We call these instances of approximate computing compulsory approximation.

This essay makes the case for hunting approximation ideas in the wild. We should bring techniques from domains with compulsory approximation into the approximate-computing fold. The potential benefits are twofold:

  1. The domains themselves stand to benefit from new abstractions for approximations they already use. Even established domain-specific approximation strategies can be ad hoc and difficult to control; the techniques and tools we have developed in the approximate-computing community can help make them more principled.
  2. Approximate computing as a whole can benefit from expertise that is currently locked away within application domains. These domains are older and more mature than approximate computing as a buzzword, so they have long contended with problems that our community is only beginning to recognize.

We enumerate examples of domains with compulsory approximation and suggest ways that our community can engage with them.

2. Compulsory Approximation Domains

This section surveys examples of compulsory approximation. We identify domain-specific approximations that (1) have something to teach us in the approximate-computing community; and (2) have problems that we can help address by applying approximation ideas.

2.1. Machine Learning

Machine learning’s charter is to approximate problems that we cannot solve exactly—or even precisely define. And while machine learning and AI have undergone a full-scale revolution over the last five years, they can still be difficult to trust. For example: deep-learning implementers use a technique called dropout, which randomly deletes neurons from networks during training [9].
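To make the dropout idea concrete, here is a minimal sketch in plain NumPy. The layer size, dropout rate, and “inverted dropout” scaling convention are illustrative choices, not taken from any particular framework; real deep-learning libraries implement this internally.

```python
import numpy as np

def dropout(activations, rate=0.5, rng=None):
    """Inverted dropout: randomly zero a fraction `rate` of units during
    training, then rescale the survivors so the expected activation
    matches test time (when dropout is disabled)."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

# Each training-time forward pass sees a different "thinned" network:
h = np.ones(10_000)           # a toy layer of activations
thinned = dropout(h, rate=0.5)
print(thinned.mean())         # close to 1.0 in expectation
```

The rescaling by `1 / (1 - rate)` is the design choice that lets inference skip dropout entirely: the expected value of each unit is unchanged, so the same weights work at test time.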
Dropout appears to avoid overfitting, leading to a better training result—but it is hard to see exactly why it works or explain when it might go wrong. Similarly, Hogwild! is a technique for parallel stochastic gradient descent that ignores data inconsistency [6]; its sensitivity to the underlying machine’s memory model is unclear. Summing up these implementation problems and more, a Google paper recently called machine learning “the high-interest credit card of technical debt”: while ML can accomplish amazing feats quickly, the reasons for its success (and, eventually, failure) can be murky [7].

We can learn: Processes and policies for measuring quality and deciding when it is good enough to ship.

We can offer: Tools for expressing and enforcing quality requirements in the language, especially when composing learning components into larger systems.

2.2. Numerical & Scientific Computing

Floating-point numbers are the world’s oldest and most widespread deployment of approximate computing. In scientific computing, the practice of bounding floating-point error is just as mature. The floating-point error problem has spawned an entire field of study, numerical analysis, and many textbooks devoted to the topic [1]. Numerical computing represents an extraordinarily thorough case study in high-overhead, manual approaches to deriving strong accuracy properties.

We can learn: Mathematical tools for deriving hard error bounds when the error model resembles floating-point rounding. The same techniques may not generalize to other approximation strategies—random bit flips or neural accelerators, for example—but they can still be valuable when errors are well behaved.

We can offer: Tools to automate tedious accuracy analyses and transformations that currently require experts. Panchekha et al.’s Herbie tool for fixing numerical stability problems is an important first step [5].

2.3. Real-Time Graphics

In games and other real-time graphics applications, everything is a compromise.
Fundamentally, GPU-accelerated rasterization is a faster but worse alternative to full ray tracing. Game engines will go to extremes to maintain a smooth frame rate by simplifying scenes wherever possible.1 Level-of-detail (LOD) techniques simplify objects for quick-and-dirty rendering when they are rendered in the background of a complex scene [2].

We can learn: Low-overhead techniques for tuning accuracy at run time in response to resource demands. The soft real-time constraints in game engines require them to adapt to changing conditions; the same capability should apply in other domains.

We can offer: Tools for navigating the huge space of individual approximation decisions that game developers typically implement.
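The LOD technique mentioned above can be sketched as a simple distance-to-detail mapping. The thresholds and mesh names below are hypothetical, not drawn from any real engine; they only illustrate the accuracy-for-speed trade at the heart of the approach.

```python
def select_lod(distance, thresholds=(10.0, 50.0, 200.0)):
    """Map camera distance to a level-of-detail index:
    0 = full detail ... len(thresholds) = coarsest proxy.
    Thresholds are illustrative; real engines tune them per asset."""
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)

# Hypothetical meshes, from most to least detailed:
meshes = ["hero_mesh", "simplified_mesh", "low_poly_mesh", "billboard"]

for d in (5.0, 30.0, 120.0, 500.0):
    print(d, "->", meshes[select_lod(d)])
```

In a real engine, this decision runs every frame and the thresholds themselves may shift under load—exactly the kind of run-time accuracy tuning the essay suggests other domains could borrow.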



Published: 2016